library(climatic4economist)
3 Compute the Standardized Precipitation Index based on spatial points
1 Introduction
In this tutorial, we extract the CHIRPS precipitation observations in Suriname, based on the centroids of the Principal Sampling Units (PSU) of the Suriname Survey of Living Conditions for the years 2016/17 and 2022. Based on the extracted precipitation, the tutorial shows how to compute the Standardized Precipitation Index (SPI) and merge it with the survey based on the date of interviews.
2 Code
2.1 Set Up
We start by setting up the stage for our analysis.
First, we load the necessary packages. We load only the climatic4economist package, which contains several functions meant to extract and merge spatial variables with surveys. During the tutorial we will use other packages, but instead of loading them all at the beginning we will call each specific function with its package prefix (e.g. haven::read_dta()).
In the setup, we also want to create the paths to the various data sources and load the necessary functions for extraction. Note that .. in a path means one step up the folder hierarchy, i.e. one folder back.
Note that how to set up the paths depends on your folder organization, but there are overall two approaches:
- you can use the R project: by opening the project directly you don’t need to set up the path to the project. The project automatically figures out where it is located on the computer and sets that path as the working folder.
- you can manually set the working folder with the function setwd().
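For the second approach, a minimal sketch (the path below is a hypothetical example, adapt it to your machine):

```r
# check where R is currently working
getwd()

# set the working folder manually (hypothetical path)
# setwd("C:/Users/me/projects/suriname-spi")
```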
# path to data folder
path_to_data <- file.path("..",
"..", "data")
# survey
path_to_wave_1 <- file.path(path_to_data, "survey", "surname", "wave 1",
"RT001_Public.dta")
path_to_wave_2 <- file.path(path_to_data, "survey", "surname", "wave 2",
"2022 RT001_Housing_plus.dta")
# administrative division
path_to_adm_div <- file.path(path_to_data, "adm_div", "GAUL")
# weather variables
path_to_pre_monthly <- file.path(path_to_data, "weather", "CHIRPS", "monthly",
"chirps-v2.0.monthly.nc")
# to result folder
path_to_result <- file.path(path_to_data, "result")

1. file.path() concatenates the strings to make a path
2. .. means one folder back
2.2 Read the data
2.2.1 Survey
We begin by reading the surveys, which in this case consist of two waves with potentially different locations. As a result, we need to load both waves. The waves are stored as dta files, so we use the haven::read_dta() function to read them.
We only need the hhid, the survey coordinates, and the interview dates. We use dplyr::select() to choose these variables. This step is optional: we could keep all the variables, but we won’t use them. Note that the first wave does not include the interview date.
We combine the two waves using dplyr::bind_rows().
We can use the head() function to preview the data and see how it looks.
wave_1 <- haven::read_dta(path_to_wave_1) |>
dplyr::select(hhid, lat_cen, long_cen) |>
dplyr::mutate(wave = 1)
wave_2 <- haven::read_dta(path_to_wave_2) |>
dplyr::select(hhid, end_date_n, lat_cen, long_cen) |>
dplyr::mutate(wave = 2)
survey <- dplyr::bind_rows(wave_1, wave_2)
head(survey)
# A tibble: 6 × 5
hhid lat_cen long_cen wave end_date_n
<dbl> <dbl> <dbl> <dbl> <date>
1 1010031 5.82 -55.2 1 NA
2 1010041 5.82 -55.2 1 NA
3 1010051 5.82 -55.2 1 NA
4 1010061 5.82 -55.2 1 NA
5 1010121 5.82 -55.2 1 NA
6 1010131 5.82 -55.2 1 NA
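To verify that only the first wave lacks the interview date, we can count missing dates by wave (a quick check using the survey object created above):

```r
# quick check: missing interview dates by wave
survey |>
  dplyr::count(wave, date_missing = is.na(end_date_n))
```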
2.2.2 Administrative Divisions
We read the spatial file containing the national borders of Suriname using read_GAUL(). By printing the spatial data, we can obtain key information, such as the dimensions (number of rows and variables), the geometry (which indicates the type of spatial object), and the coordinate reference system (CRS), which links the coordinates to precise locations on the Earth’s surface. The CRS is particularly important when working with different spatial datasets, as mismatched CRSs can prevent the datasets from aligning correctly.
adm_div <- read_GAUL(path_to_adm_div, iso = "SUR", lvl = 2)
adm_div
class : SpatVector
geometry : polygons
dimensions : 62, 4 (geometries, attributes)
extent : -58.04987, -53.97982, 1.83114, 6.004546 (xmin, xmax, ymin, ymax)
source : GAUL-SUR-ADM2.geojson
coord. ref. : lon/lat WGS 84 (EPSG:4326)
names : ID_adm_div iso adm_div_1 adm_div_2
type : <chr> <chr> <chr> <chr>
values : 1 SUR Brokopondo Brownsweg
2 SUR Brokopondo Centrum
3 SUR Brokopondo Klaaskreek
2.2.3 Weather
Finally, we load the precipitation data. Climatic data typically comes in the form of raster data. A raster represents a two-dimensional image as a rectangular matrix or grid of pixels. These are spatial rasters because they are georeferenced, meaning each pixel (or “cell” in GIS terms) represents a square region of geographic space. The value of each cell reflects a measurable property (either qualitative or quantitative) of that region. In this case, the values are monthly precipitation that fell in that region. We use the function terra::rast() to load the raster data.
This particular raster has global coverage, so we crop it to focus on the country area to reduce its size. Although this step is not strictly necessary, it helps decrease the memory load and makes visualizations more manageable. We use the function crop_with_buffer() for this purpose.
When we print the raster, we obtain several key details. The dimension tells us how many cells the raster consists of and the number of layers; each layer corresponds to a particular month for which the observations were made. We also get the spatial resolution, which defines the size of each square region in geographic space, and the coordinate reference system (CRS). Given the importance of the CRS, we extract it using terra::crs() and save it for later use.
We also rename the raster layers to reflect the corresponding dates for each layer, as this is useful if we want to track the dates. We use terra::time() to extract the dates.
Note that rasters can store time information in different ways, so it may not always be possible to retrieve dates in this manner. A common alternative is for dates to be embedded in the layer names, in which case we wouldn’t need to rename the layers.
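For example, if dates were embedded in layer names such as precip_1981.01.01 (a hypothetical naming scheme), they could be recovered like this:

```r
# hypothetical layer names with embedded dates
lyr_names <- c("precip_1981.01.01", "precip_1981.02.01")
as.Date(sub("precip_", "", lyr_names), format = "%Y.%m.%d")
#> [1] "1981-01-01" "1981-02-01"
```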
weather_monthly <- terra::rast(path_to_pre_monthly) |>
crop_with_buffer(adm_div)
weather_monthly
class : SpatRaster
dimensions : 83, 81, 527 (nrow, ncol, nlyr)
resolution : 0.05, 0.05 (x, y)
extent : -58.05, -54, 1.85, 6 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84 +no_defs
source(s) : memory
varname : precip (Climate Hazards group InfraRed Precipitation with Stations)
names : precip_1, precip_2, precip_3, precip_4, precip_5, precip_6, ...
min values : 36.21083, 49.09035, 34.04647, 153.5361, 103.0583, 90.83206, ...
max values : 403.70178, 653.45990, 345.78024, 622.1295, 795.1603, 1034.43372, ...
unit : mm/month, mm/month, mm/month, mm/month, mm/month, mm/month, ...
time (days) : 1981-01-01 to 2024-11-01
names(weather_monthly) <- terra::time(weather_monthly)
weather_monthly
class : SpatRaster
dimensions : 83, 81, 527 (nrow, ncol, nlyr)
resolution : 0.05, 0.05 (x, y)
extent : -58.05, -54, 1.85, 6 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84 +no_defs
source(s) : memory
varname : precip (Climate Hazards group InfraRed Precipitation with Stations)
names : 1981-01-01, 1981-02-01, 1981-03-01, 1981-04-01, 1981-05-01, 1981-06-01, ...
min values : 36.21083, 49.09035, 34.04647, 153.5361, 103.0583, 90.83206, ...
max values : 403.70178, 653.45990, 345.78024, 622.1295, 795.1603, 1034.43372, ...
unit : mm/month, mm/month, mm/month, mm/month, mm/month, mm/month, ...
time (days) : 1981-01-01 to 2024-11-01
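Since we will need the CRS later, a minimal sketch of extracting and checking it (using the objects created above):

```r
# save the CRS of the weather raster for later use
crs_weather <- terra::crs(weather_monthly)

# check that it matches the administrative boundaries
terra::same.crs(weather_monthly, adm_div)
#> [1] TRUE
```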
2.3 Georeference the survey
As we’ve mentioned, the weather data is georeferenced, so we need to ensure the same for the survey data. Since many households share the same coordinates, they are linked to the same weather events. To reduce computation time, we extract data only for the unique coordinates, rather than for each household. Moreover, we must ensure that we can later associate the correct weather data with the right household; we do this by creating a merging variable called ID.
This is handled by the prepare_coord() function, which requires the coordinates’ variable names as input.
We can print the result to check the transformation. The new column, ID, is created by prepare_coord() and identifies each unique coordinate. This is used to merge the weather data with the household data.
srvy_coord <- prepare_coord(survey, lat_var = lat_cen, lon_var = long_cen)
srvy_coord
# A tibble: 4,535 × 6
ID hhid lat_cen long_cen wave end_date_n
<chr> <dbl> <dbl> <dbl> <dbl> <date>
1 1 23903250101 3.85 -54.2 2 2022-12-06
2 1 23903250201 3.85 -54.2 2 2022-12-06
3 1 23903250301 3.85 -54.2 2 2022-12-06
4 1 23903250401 3.85 -54.2 2 2022-12-07
5 1 23903252201 3.85 -54.2 2 2022-12-07
6 1 23903252301 3.85 -54.2 2 2022-12-06
7 1 23903252501 3.85 -54.2 2 2022-12-06
8 1 23903252601 3.85 -54.2 2 2022-12-06
9 1 23903253301 3.85 -54.2 2 2022-12-07
10 1 23903253501 3.85 -54.2 2 2022-12-06
# ℹ 4,525 more rows
Once we have the unique coordinates, we are ready to transform them into spatial points using the georef_coord() function. When performing this transformation, it’s crucial to set the correct CRS, which must match that of the weather data. The CRS is provided as an argument to the function, using the previously saved CRS from the weather data. The georef_coord() function also requires the coordinates’ variable names as input.
srvy_geo <- georef_coord(srvy_coord,
geom = c("long_cen", "lat_cen"),
crs = "EPSG:4326")
srvy_geo
class : SpatVector
geometry : points
dimensions : 688, 1 (geometries, attributes)
extent : -57.05207, -54.05524, 3.849064, 5.943465 (xmin, xmax, ymin, ymax)
coord. ref. : lon/lat WGS 84 (EPSG:4326)
names : ID
type : <chr>
values : 1
2
3
Note how there are 688 rows. These are the unique locations from the survey.
2.4 Plot
A good practice when working with spatial data is to plot it. This is the best way to verify that everything is working as expected.
First, we plot the survey coordinates to ensure they are correctly located within the country and to examine their spatial distribution.
terra::plot(adm_div, col = "grey", main = "Suriname and PSU centroids")
terra::points(srvy_geo, col = "gold", alpha = 0.5, cex = 0.5)

We confirm that the survey locations are within the country borders, which is great! We also observe that the spatial distribution of survey coordinates is neither random nor uniform; most are concentrated near the capital and along the coast.
Next, we plot a layer of the precipitation data to see how it overlaps with the spatial coordinates.
terra::plot(weather_monthly, "2024-10-01", col = terra::map.pal("water"),
main = "Monthly precipitation at 2024-10 and survey location")
terra::lines(adm_div, col = "white", lwd = 2)
terra::points(srvy_geo, col = "red", alpha = 0.5, cex = 0.5)

Once again, the survey coordinates align with the precipitation data, which is great! We can also observe the high spatial resolution of the CHIRPS dataset. However, despite this high resolution, some survey coordinates still fall within the same cell.
2.5 Extract
Next, we extract the weather data based on the survey coordinates using the extract_by_coord() function. This function requires the raster with the weather data and the georeferenced coordinates as inputs.
Looking at the result, we first see the ID column, which identifies the unique survey coordinates. The second and third columns are the coordinates of the cells. The other columns contain the weather observations over time specific to each coordinate.
pre_coord <- extract_by_coord(raster = weather_monthly,
coord = srvy_geo)
pre_coord
# A tibble: 688 × 530
ID x_cell y_cell X1981_01_01 X1981_02_01 X1981_03_01 X1981_04_01
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 -54.2 3.82 255. 260. 133. 359.
2 2 -55.5 3.92 225. 249. 87.2 380.
3 3 -55.5 3.97 184. 239. 86.9 384.
4 4 -55.5 4.02 186. 251. 86.3 368.
5 5 -55.5 4.02 186. 251. 86.3 368.
6 6 -55.4 4.02 167. 265. 86.6 379.
7 7 -54.8 4.07 140. 261. 107. 347.
8 8 -55.4 4.12 186. 257. 96.2 347.
9 9 -55.4 4.17 210. 228. 87.0 376.
10 10 -55.2 4.17 213. 260. 90.0 339.
# ℹ 678 more rows
# ℹ 523 more variables: X1981_05_01 <dbl>, X1981_06_01 <dbl>,
# X1981_07_01 <dbl>, X1981_08_01 <dbl>, X1981_09_01 <dbl>, X1981_10_01 <dbl>,
# X1981_11_01 <dbl>, X1981_12_01 <dbl>, X1982_01_01 <dbl>, X1982_02_01 <dbl>,
# X1982_03_01 <dbl>, X1982_04_01 <dbl>, X1982_05_01 <dbl>, X1982_06_01 <dbl>,
# X1982_07_01 <dbl>, X1982_08_01 <dbl>, X1982_09_01 <dbl>, X1982_10_01 <dbl>,
# X1982_11_01 <dbl>, X1982_12_01 <dbl>, X1983_01_01 <dbl>, …
Again we have a row for each unique location from the survey. However, if we want to know how many different cells there are, we can look at the unique cell coordinates.
unique_cell <- pre_coord |>
dplyr::distinct(x_cell, y_cell)
nrow(unique_cell)
[1] 107
We see that the number of rows is now 107: this is the actual number of distinct weather observations that we can merge with the survey.
2.6 Compute the SPI
We now compute the SPI with the function compute_spi(). This function requires the precipitation time series for each location and the time scale at which the SPI is computed.
To compute the SPI, it is recommended to use at least 30 years of observations to ensure a good estimation of the parameters. More years can strengthen the estimation, but the results can be affected by climate change: if there has been a change in the climate parameters, old observations might not be indicative of the current situation, affecting the estimation. There is no clear rule on this, so we add the possibility to select the time range of observations with the function select_by_dates(). The function requires the starting date from, the end date to, or both. If both are provided, the function selects between the two dates; if only from is provided, the function selects all dates after it; and if only to is provided, the function selects all dates before it.
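The three call patterns can be sketched as follows (the dates are illustrative):

```r
# between two dates
# pre_coord <- select_by_dates(pre_coord, from = "1991-01-01", to = "2020-12-01")

# all dates after `from`
# pre_coord <- select_by_dates(pre_coord, from = "1991-01-01")

# all dates before `to`
# pre_coord <- select_by_dates(pre_coord, to = "2020-12-01")
```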
Looking at the result, the first column is ID, which we will use to merge back with the survey. The other columns contain the SPI observations over time specific to each coordinate.
# coord <- select_by_dates(coord, from = "1991-01-01", to = "2023-01-01")
spi3 <- compute_spi(pre_coord, time_scale = 3)
spi3
# A tibble: 688 × 530
ID x_cell y_cell X1981_01_01 X1981_02_01 X1981_03_01 X1981_04_01
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 -54.2 3.82 NA NA -0.0896 0.333
2 10 -55.2 4.17 NA NA -0.173 0.208
3 100 -55.2 5.67 NA NA 0.124 0.624
4 101 -55.4 5.72 NA NA 0.0581 0.918
5 102 -55.2 5.72 NA NA 0.0618 0.843
6 103 -55.2 5.72 NA NA 0.0618 0.843
7 104 -55.2 5.72 NA NA 0.0618 0.843
8 105 -55.2 5.72 NA NA 0.0618 0.843
9 106 -55.1 5.72 NA NA 0.121 0.994
10 107 -55.1 5.72 NA NA 0.121 0.994
# ℹ 678 more rows
# ℹ 523 more variables: X1981_05_01 <dbl>, X1981_06_01 <dbl>,
# X1981_07_01 <dbl>, X1981_08_01 <dbl>, X1981_09_01 <dbl>, X1981_10_01 <dbl>,
# X1981_11_01 <dbl>, X1981_12_01 <dbl>, X1982_01_01 <dbl>, X1982_02_01 <dbl>,
# X1982_03_01 <dbl>, X1982_04_01 <dbl>, X1982_05_01 <dbl>, X1982_06_01 <dbl>,
# X1982_07_01 <dbl>, X1982_08_01 <dbl>, X1982_09_01 <dbl>, X1982_10_01 <dbl>,
# X1982_11_01 <dbl>, X1982_12_01 <dbl>, X1983_01_01 <dbl>, …
If we want to calculate the SPI with time scale equal to one, we just need to change the time_scale argument.
spi1 <- compute_spi(pre_coord, time_scale = 1)
spi1
# A tibble: 688 × 530
ID x_cell y_cell X1981_01_01 X1981_02_01 X1981_03_01 X1981_04_01
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 -54.2 3.82 0.115 0.543 -0.801 0.913
2 10 -55.2 4.17 -0.0471 0.737 -1.06 0.688
3 100 -55.2 5.67 0.0636 0.592 -0.170 0.838
4 101 -55.4 5.72 -0.722 1.04 0.136 0.755
5 102 -55.2 5.72 -0.478 1.06 -0.131 0.755
6 103 -55.2 5.72 -0.478 1.06 -0.131 0.755
7 104 -55.2 5.72 -0.478 1.06 -0.131 0.755
8 105 -55.2 5.72 -0.478 1.06 -0.131 0.755
9 106 -55.1 5.72 -0.494 1.24 -0.135 0.853
10 107 -55.1 5.72 -0.494 1.24 -0.135 0.853
# ℹ 678 more rows
# ℹ 523 more variables: X1981_05_01 <dbl>, X1981_06_01 <dbl>,
# X1981_07_01 <dbl>, X1981_08_01 <dbl>, X1981_09_01 <dbl>, X1981_10_01 <dbl>,
# X1981_11_01 <dbl>, X1981_12_01 <dbl>, X1982_01_01 <dbl>, X1982_02_01 <dbl>,
# X1982_03_01 <dbl>, X1982_04_01 <dbl>, X1982_05_01 <dbl>, X1982_06_01 <dbl>,
# X1982_07_01 <dbl>, X1982_08_01 <dbl>, X1982_09_01 <dbl>, X1982_10_01 <dbl>,
# X1982_11_01 <dbl>, X1982_12_01 <dbl>, X1983_01_01 <dbl>, …
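To interpret the SPI values, here is a hypothetical helper (not part of the package) that labels them with the usual McKee (1993) drought classes:

```r
# hypothetical helper: label SPI values with the standard McKee (1993) classes
classify_spi <- function(spi) {
  dplyr::case_when(
    spi <= -2   ~ "extremely dry",
    spi <= -1.5 ~ "severely dry",
    spi <= -1   ~ "moderately dry",
    spi <   1   ~ "near normal",
    spi <   1.5 ~ "moderately wet",
    spi <   2   ~ "very wet",
    TRUE        ~ "extremely wet"
  )
}

classify_spi(c(-2.3, 0.1, 1.7))
#> [1] "extremely dry" "near normal"   "very wet"
```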
2.7 Merge with survey
Now, we combine the extracted weather data with the survey data using ID as the key matching variable.
spi3_survey <- merge_with_survey(srvy_coord, spi3)
spi3_survey
# A tibble: 4,535 × 535
ID hhid lat_cen long_cen wave end_date_n x_cell y_cell X1981_01_01
<chr> <dbl> <dbl> <dbl> <dbl> <date> <dbl> <dbl> <dbl>
1 1 23903250101 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
2 1 23903250201 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
3 1 23903250301 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
4 1 23903250401 3.85 -54.2 2 2022-12-07 -54.2 3.82 NA
5 1 23903252201 3.85 -54.2 2 2022-12-07 -54.2 3.82 NA
6 1 23903252301 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
7 1 23903252501 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
8 1 23903252601 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
9 1 23903253301 3.85 -54.2 2 2022-12-07 -54.2 3.82 NA
10 1 23903253501 3.85 -54.2 2 2022-12-06 -54.2 3.82 NA
# ℹ 4,525 more rows
# ℹ 526 more variables: X1981_02_01 <dbl>, X1981_03_01 <dbl>,
# X1981_04_01 <dbl>, X1981_05_01 <dbl>, X1981_06_01 <dbl>, X1981_07_01 <dbl>,
# X1981_08_01 <dbl>, X1981_09_01 <dbl>, X1981_10_01 <dbl>, X1981_11_01 <dbl>,
# X1981_12_01 <dbl>, X1982_01_01 <dbl>, X1982_02_01 <dbl>, X1982_03_01 <dbl>,
# X1982_04_01 <dbl>, X1982_05_01 <dbl>, X1982_06_01 <dbl>, X1982_07_01 <dbl>,
# X1982_08_01 <dbl>, X1982_09_01 <dbl>, X1982_10_01 <dbl>, …
We are back to 4,535 rows, which matches the original survey.
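A quick sanity check that the merge preserved the survey size:

```r
# one row per household, as in the original survey
nrow(spi3_survey) == nrow(survey)
#> [1] TRUE
```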
2.8 Select based on the interview
Now that we have merged the SPI values with the survey, we can select just the relevant observations.
If we want to select just a subset of observations, we can use the select_by_dates() function. If we want to select based on the date of interview of the survey, we can use select_by_interview(). This function requires the variable that contains the dates of interview and the interval to select based on those dates. The interval must be expressed in a number of months or years. The wide argument specifies how the output should be reported: wide, with each time observation as a separate column, or long, with all observations in one column.
Note that the current version of the select_by_interview() function drops the observations with a missing date of interview.
What is relevant depends on the particular application, but we can agree that we don’t want to assign weather observations that happened after the interviews, as these cannot influence the answers.
In this tutorial we select the 12 months before the interviews using the function select_by_interview(). The argument interview selects the variable containing the date of interviews, and the argument interval defines how far back in time the function needs to select the observations.
If there are missing observations for the date of interviews, the function warns us that these observations are dropped.
spi3_hh <- select_by_interview(spi3_survey,
interview = end_date_n,
interval = "1 year",
wide = TRUE)

Missing interview are dropped!
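If instead we prefer the long format, with one row per household-month, we can flip the wide argument (a sketch using the same arguments as above):

```r
# spi3_hh_long <- select_by_interview(spi3_survey,
#                                     interview = end_date_n,
#                                     interval = "1 year",
#                                     wide = FALSE)
```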
2.9 Save
The final step of the code is to save the result. In this case, we save it as a dta file using the haven::write_dta() function.
# merge the SPI with time scale 1 before saving (spi1 was not merged above)
spi1_survey <- merge_with_survey(srvy_coord, spi1)

haven::write_dta(spi1_survey, file.path(path_to_result, "spi_1.dta"))
haven::write_dta(spi3_hh, file.path(path_to_result, "spi_3.dta"))

3 Take home messages
When working with multiple spatial data:

- remember to control the Coordinate Reference System of all datasets
- plot the data to check everything is going well

Based on the type of data, we use different functions to read it:

- for dta files use haven::read_dta() and haven::write_dta()
- for spatial vectors use read_GAUL() for administrative divisions at a specific country and level, otherwise terra::vect()
- for spatial rasters use terra::rast()

Extraction:

- since many households share the same locations, we take advantage of this by extracting the weather data only at the unique locations; we achieve this with the function prepare_coord()
- some of these unique locations may fall within the same raster cells, so the actual information might be even lower
4 Appendix
4.1 Want to know about the data?
4.1.1 Precipitation
Monthly and daily precipitation from Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS)1 is a 35+ year quasi-global rainfall data set. Spanning 50°S-50°N (and all longitudes) and ranging from 1981 to near-present, CHIRPS incorporates in-house climatology, CHPclim, 0.05° resolution satellite imagery, and in-situ station data to create gridded rainfall time series for trend analysis and seasonal drought monitoring.
Data can be downloaded from here, while extra information is available here.
| feature | value |
|---|---|
| spatial resolution | 0.05 x 0.05 (~ 5 km) |
| temporal resolution | monthly or daily |
| temporal frame | 1981 - near present |
| unit of measure | mm/month or mm/day |
4.1.2 Surveys
Suriname Survey of Living Conditions. The 2022 Suriname Survey of Living Conditions is a joint survey made by The Inter-American Development Bank (IDB) and the World Bank. The 2022 Suriname Survey of Living Conditions - administered to a nationally representative sample, which included 7,713 individuals from 2,540 households - was developed to support poverty analysis as well as policy planning and is a helpful tool for policy makers to facilitate fact-based decision making. The survey’s design and execution were financed by the IDB, while the World Bank and IDB are joining forces to analyze data and produce initial findings.
The Suriname Survey of Living Conditions (SSLC) 2016/17 is an effort of the Inter-American Development Bank (IDB) with the support of Energiebedrijven Suriname (the state-owned electrical company of Suriname) and the Central Bank of Suriname. It visited about 2,000 households from October 2016 through September 2017 and collected data on the most important dimensions of welfare, which will support evidence-based policy making in areas such as education, health, housing, employment and poverty alleviation. The survey also gathered information on the consumption patterns, income and expenditures of Surinamese households, intended to update the Consumer Price Index basket and inform the System of National Accounts.
4.1.3 Administrative boundaries
The Global Administrative Unit Layers (GAUL) 2024 is a vector dataset compiling the most recent available information on administrative boundaries from multiple sources, produced by FAO from 2022 to 2024 in the framework of the Hand-in-Hand Initiative and the Geospatial Data Platform activities.
The GAUL 2024 aims at maintaining global layers with a unified coding system at country, first (e.g. departments), and second (e.g. districts) administrative levels. The data, sourced and processed from the United Nations Second Administrative Level Boundaries (UN-SALB) programme and from other relevant data sources, was complemented by the UN-FAO-CSI AgroInformatics Geospatial Analysis team with data from official geospatial data producers. Country boundaries were processed against UN official recognized borders (UN-map 2018), and the administrative subdivisions were checked for geometry and topology, then corrected and validated. The administrative boundaries dataset at level 1 and 2 (Sub-national level) is part of the Global Administrative Unit Layers (GAUL) dataset series which includes information on administrative units for all the countries in the world, providing a contribution to the standardization of the spatial dataset representing administrative units. The administrative boundaries at the level 1 dataset distinguishes States, Provinces, Departments and equivalent. The administrative boundaries at the level 2 dataset distinguishes Districts and equivalent.
Suggested citation:
- FAO. 2024. Global Administrative Unit Layers (GAUL). [Accessed on [DD Month YYYY]]. https://data.apps.fao.org/?lang=en. Licence: CC-BY-4.0
It is possible to find additional information here.

The data can be freely downloaded from here.
Footnotes
Funk, C.C., Peterson, P.J., Landsfeld, M.F., Pedreros, D.H., Verdin, J.P., Rowland, J.D., Romero, B.E., Husak, G.J., Michaelsen, J.C., and Verdin, A.P., 2014, A quasi-global precipitation time series for drought monitoring: U.S. Geological Survey Data Series 832, 4 p. http://pubs.usgs.gov/ds/832/